In machine learning, the polynomial kernel is a kernel function commonly used with support vector machines (SVMs) and other kernelized models, representing the similarity of vectors (training samples) in a feature space over polynomials of the original variables, thereby allowing the learning of non-linear models. Intuitively, the polynomial kernel looks not only at the given features of input samples to determine their similarity, but also at combinations of these. In the context of regression analysis, such combinations are known as interaction features. The (implicit) feature space of a polynomial kernel is equivalent to that of polynomial regression, but without the combinatorial blowup in the number of parameters to be learned. When the input features are binary-valued (booleans), the features correspond to logical conjunctions of input features.〔Yoav Goldberg and Michael Elhadad (2008). splitSVM: Fast, Space-Efficient, non-Heuristic, Polynomial Kernel Computation for NLP Applications. Proc. ACL-08: HLT.〕

==Definition==
For degree-<math>d</math> polynomials, the polynomial kernel is defined as〔http://www.cs.tufts.edu/~roni/Teaching/CLT/LN/lecture18.pdf〕

:<math>K(x, y) = (x^\mathsf{T} y + c)^d</math>

where <math>x</math> and <math>y</math> are vectors in the ''input space'', i.e. vectors of features computed from training or test samples, and <math>c \ge 0</math> is a free parameter trading off the influence of higher-order versus lower-order terms in the polynomial. When <math>c = 0</math>, the kernel is called homogeneous. (A further generalized polykernel divides <math>x^\mathsf{T} y</math> by a user-specified scalar parameter <math>a</math>.)

As a kernel, <math>K</math> corresponds to an inner product in a feature space based on some mapping <math>\varphi</math>:

:<math>K(x, y) = \langle \varphi(x), \varphi(y) \rangle</math>

The nature of <math>\varphi</math> can be seen from an example. Let <math>d = 2</math>, so we get the special case of the quadratic kernel. After using the multinomial theorem (twice; the outermost application is the binomial theorem) and regrouping,

:<math>K(x, y) = \left(\sum_{i=1}^n x_i y_i + c\right)^2 = \sum_{i=1}^n x_i^2 y_i^2 + \sum_{i=2}^n \sum_{j=1}^{i-1} \left(\sqrt{2}\, x_i x_j\right)\left(\sqrt{2}\, y_i y_j\right) + \sum_{i=1}^n \left(\sqrt{2c}\, x_i\right)\left(\sqrt{2c}\, y_i\right) + c^2</math>

From this it follows that the feature map is given by:

:<math>\varphi(x) = \left\langle x_n^2, \ldots, x_1^2, \sqrt{2}\, x_n x_{n-1}, \ldots, \sqrt{2}\, x_i x_j, \ldots, \sqrt{2}\, x_2 x_1, \sqrt{2c}\, x_n, \ldots, \sqrt{2c}\, x_1, c \right\rangle</math>
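The equivalence between the kernel and the explicit feature map can be checked numerically. The following is a minimal sketch in NumPy (an illustration, not part of the original text); the function names `polynomial_kernel` and `phi` are chosen here for clarity and are not from any particular library:

```python
import numpy as np

def polynomial_kernel(x, y, degree=2, c=1.0):
    """Polynomial kernel K(x, y) = (x . y + c)^d."""
    return (np.dot(x, y) + c) ** degree

def phi(x, c=1.0):
    """Explicit feature map for the quadratic (d = 2) case.

    Components: all squares x_i^2, all scaled cross terms
    sqrt(2) x_i x_j (i > j), scaled linear terms sqrt(2c) x_i,
    and the constant c.
    """
    n = len(x)
    squares = [x[i] ** 2 for i in range(n)]
    cross = [np.sqrt(2.0) * x[i] * x[j] for i in range(n) for j in range(i)]
    linear = [np.sqrt(2.0 * c) * x[i] for i in range(n)]
    return np.array(squares + cross + linear + [c])

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])

# The implicit inner product and the explicit one agree:
# (x . y + c)^2 = phi(x) . phi(y)
assert np.isclose(polynomial_kernel(x, y), np.dot(phi(x), phi(y)))
```

Note that `phi(x)` has <math>\tbinom{n}{2} + 2n + 1</math> components for an <math>n</math>-dimensional input, which is exactly the combinatorial growth that evaluating the kernel directly avoids.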